
    Environmental Anchoring of Head Direction in a Computational Model of Retrosplenial Cortex

    Allocentric (world-centered) spatial codes driven by path integration accumulate error unless reset by environmental sensory inputs that are necessarily egocentric (body-centered). Previous models of the head direction system avoided the necessary transformation between egocentric and allocentric reference frames by placing visual cues at infinity. Here we present a model of head direction coding that copes with exclusively proximal cues by making use of a conjunctive representation of head direction and location in retrosplenial cortex. Egocentric landmark bearing of proximal cues, which changes with location, is mapped onto this retrosplenial representation. The model avoids distortions due to parallax, which occur in simple models when a single proximal cue card is used, and can also accommodate multiple cues, suggesting how it can generalize to arbitrary sensory environments. It provides a functional account of the anatomical distribution of head direction cells along Papez' circuit, of place-by-direction coding in retrosplenial cortex, of the anatomical connection from the anterior thalamic nuclei to retrosplenial cortex, and of the involvement of retrosplenial cortex in navigation. In addition to parallax correction, the same mechanism allows for continuity of head direction coding between connected environments and shows how a head direction representation can be stabilized by a single within-arena cue. We also make predictions for drift during exploration of a new environment, for the effects of hippocampal lesions on retrosplenial cells, and for head direction coding in differently shaped environments. SIGNIFICANCE STATEMENT: The activity of head direction cells signals the direction of an animal's head relative to landmarks in the world. Although driven by internal estimates of head movements, head direction cells must be kept aligned to the external world by sensory inputs, which arrive in the reference frame of the sensory receptors. We present a computational model which proposes that sensory inputs are correctly associated with head directions by virtue of a conjunctive representation of place and head direction in the retrosplenial cortex. The model allows for a stable head direction signal even though the sensory input from nearby cues changes dramatically whenever the animal moves to a different location, and it enables stable representations of head direction across connected environments.
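    The parallax problem this model addresses can be stated in a few lines: the egocentric bearing of a proximal cue depends on where the animal stands, not only on which way it faces, so a fixed bearing-to-head-direction mapping would be distorted. A minimal sketch of that geometry (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def egocentric_bearing(agent_xy, head_direction, cue_xy):
    """Egocentric bearing of a cue: its angle relative to the agent's facing
    direction. For a proximal cue this depends on the agent's location
    (parallax), unlike a cue placed at infinity."""
    dx, dy = np.asarray(cue_xy, float) - np.asarray(agent_xy, float)
    allocentric_bearing = np.arctan2(dy, dx)            # world-centred angle to the cue
    return (allocentric_bearing - head_direction + np.pi) % (2 * np.pi) - np.pi

# Same head direction (facing 'east', 0 rad), two different locations:
cue = (1.0, 0.0)                                        # a nearby cue card
print(egocentric_bearing((0.0, 0.0), 0.0, cue))         # 0.0   -> cue straight ahead
print(egocentric_bearing((0.0, -1.0), 0.0, cue))        # ~0.79 -> cue ahead-left, despite identical heading
```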

    Neuronal vector coding in spatial cognition

    Several types of neurons involved in spatial navigation and memory encode the distance and direction (that is, the vector) between an agent and items in its environment. Such vectorial information provides a powerful basis for spatial cognition by representing the geometric relationships between the self and the external world. Here, we review the explicit encoding of vectorial information by neurons in and around the hippocampal formation, far from the sensory periphery. The parahippocampal, retrosplenial and parietal cortices, as well as the hippocampal formation and striatum, provide a plethora of examples of vector coding at the single neuron level. We provide a functional taxonomy of cells with vectorial receptive fields as reported in experiments and proposed in theoretical work. The responses of these neurons may provide the fundamental neural basis for the (bottom-up) representation of environmental layout and (top-down) memory-guided generation of visuospatial imagery and navigational planning.
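    The self-to-item vector discussed here can be written in either reference frame; a minimal illustration (function names hypothetical) of the allocentric form and its egocentric counterpart, which differ only by a rotation through the current head direction:

```python
import numpy as np

def allocentric_vector(agent_xy, item_xy):
    """World-centred self-to-item vector, returned as (distance, allocentric direction)."""
    d = np.asarray(item_xy, float) - np.asarray(agent_xy, float)
    return float(np.hypot(d[0], d[1])), float(np.arctan2(d[1], d[0]))

def egocentric_vector(agent_xy, head_direction, item_xy):
    """The same vector expressed relative to the agent's current heading."""
    dist, allo_dir = allocentric_vector(agent_xy, item_xy)
    ego_dir = (allo_dir - head_direction + np.pi) % (2 * np.pi) - np.pi
    return dist, ego_dir

print(allocentric_vector((0, 0), (3, 4)))               # (5.0, ~0.93 rad), independent of heading
print(egocentric_vector((0, 0), np.pi / 2, (3, 4)))     # (5.0, ~-0.64 rad), depends on heading
```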

    A model of head direction and landmark coding in complex environments

    Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than of any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja's Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, to incorporate newly added stable cues, and to support orientation across many different environments (high memory capacity), and it is consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
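    For orientation, the learning rule named here is a variant of a standard subspace-learning update. A sketch of the unmodified textbook form of Oja's Subspace Algorithm (the paper uses a modified version; sizes and names below are arbitrary):

```python
import numpy as np

def oja_subspace_step(W, x, lr=0.005):
    """One step of the textbook Oja Subspace Algorithm. W: (n_inputs, n_outputs)
    weights, x: (n_inputs,) input sample. The columns of W converge toward an
    orthonormal basis of the principal subspace of the inputs, discarding
    low-variance (unstable) components."""
    y = W.T @ x                                       # output activities
    return W + lr * (np.outer(x, y) - W @ np.outer(y, y))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 3))               # e.g. 20 feature-specific visual inputs -> 3 units
for _ in range(2000):
    x = rng.normal(size=20)                           # stand-in for one visual input sample
    W = oja_subspace_step(W, x)
print(np.round(W.T @ W, 2))                           # approaches the identity as columns orthonormalize
```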

    How environment and self-motion combine in neural representations of space

    Estimates of location or orientation can be constructed solely from sensory information representing environmental cues. In unfamiliar or sensory-poor environments, these estimates can also be maintained and updated by integrating self-motion information. However, the accumulation of error dictates that updated representations of heading direction and location become progressively less reliable over time, and must be corrected by environmental sensory inputs when available. Anatomical, electrophysiological and behavioural evidence indicates that angular and translational path integration contributes to the firing of head direction cells and grid cells. We discuss how sensory inputs may be combined with self-motion information in the firing patterns of these cells. For head direction cells, direct projections from egocentric sensory representations of distal cues can help to correct cumulative errors. Grid cells may benefit from sensory inputs via boundary vector cells and place cells. However, the allocentric code of boundary vector cells and place cells requires consistent head-direction information in order to translate the sensory signal of egocentric boundary distance into allocentric boundary vector cell firing, suggesting that the different spatial representations found in and around the hippocampal formation are interdependent. We conclude that, rather than representing pure path integration, the firing of head direction cells and grid cells reflects the interface between self-motion and environmental sensory information. Together with place cells and boundary vector cells, they can support a coherent unitary representation of space based on both environmental sensory inputs and path integration signals.
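    The core argument — path-integrated heading drifts and must be corrected by environmental input whenever it is available — can be captured in a few lines. A minimal sketch (gain, noise level and cue schedule are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_heading(theta, angular_velocity, dt, cue_heading=None,
                   gain=0.25, noise_sd=0.01):
    """Angular path integration with optional environmental correction.
    theta: current heading estimate (rad); angular_velocity: self-motion signal;
    cue_heading: heading implied by environmental cues, when available.
    Without the correction term, the noise makes the estimate drift without bound."""
    theta = theta + angular_velocity * dt + rng.normal(0.0, noise_sd)
    if cue_heading is not None:
        error = (cue_heading - theta + np.pi) % (2 * np.pi) - np.pi
        theta = theta + gain * error                  # nudge toward the cue-derived heading
    return theta % (2 * np.pi)

theta = 0.0
for step in range(1000):
    # cue visible only every 20th step; in between, pure path integration
    cue = 0.5 if step % 20 == 0 else None
    theta = update_heading(theta, angular_velocity=0.0, dt=0.02, cue_heading=cue)
print(round(theta, 2))                                # remains close to 0.5 rather than drifting away
```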

    A neural-level model of spatial memory and imagery

    We present a model of how neural representations of egocentric spatial experiences in parietal cortex interface with viewpoint-independent representations in medial temporal areas, via retrosplenial cortex, to enable many key aspects of spatial cognition. This account shows how previously reported neural responses (place, head-direction and grid cells, allocentric boundary- and object-vector cells, gain-field neurons) can map onto higher cognitive function in a modular way, and predicts new cell types (egocentric and head-direction-modulated boundary- and object-vector cells). The model predicts how these neural populations should interact across multiple brain regions to support spatial memory, scene construction, novelty detection, 'trace cells', and mental navigation. Simulated behavior and firing rate maps are compared to experimental data, for example showing how object-vector cells allow items to be remembered within a contextual representation based on environmental boundaries, and how grid cells could update the viewpoint in imagery during planning and short-cutting by driving sequential place cell activity.
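    The egocentric-to-allocentric interface that this model routes through retrosplenial cortex reduces, in its bare geometric form, to a rotation of egocentric vectors by the current head direction. A minimal sketch of that geometry only, not of the gain-field circuit the model actually uses (conventions assumed: x = ahead, y = left of the animal):

```python
import numpy as np

def egocentric_to_allocentric(ego_xy, head_direction):
    """Rotate an egocentric boundary/object vector into allocentric,
    world-centred coordinates using the current head direction."""
    c, s = np.cos(head_direction), np.sin(head_direction)
    R = np.array([[c, -s], [s, c]])
    return R @ np.asarray(ego_xy, float)

# A boundary segment 2 m dead ahead while the animal faces 'north' (pi/2 rad)
# lies to the allocentric north of the animal:
print(egocentric_to_allocentric((2.0, 0.0), np.pi / 2))   # ~[0., 2.]
```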

    A Computational Model of Visual Recognition Memory via Grid Cells

    Models of face, object, and scene recognition traditionally focus on massively parallel processing of low-level features, with higher-order representations emerging at later processing stages. However, visual perception is tightly coupled to eye movements, which are necessarily sequential. Recently, neurons in entorhinal cortex have been reported with grid cell-like firing in response to eye movements, i.e., in visual space. Following the presumed role of grid cells in vector navigation, we propose a model of recognition memory for familiar faces, objects, and scenes, in which grid cells encode translation vectors between salient stimulus features. A sequence of saccadic eye-movement vectors, moving from one salient feature to the expected location of the next, potentially confirms an initial hypothesis (accumulating evidence toward a threshold) about stimulus identity, based on the relative feature layout (i.e., going beyond recognition of individual features). The model provides an explicit neural mechanism for the long-held view that directed saccades support hypothesis-driven, constructive perception and recognition; is compatible with holistic face processing; and constitutes the first quantitative proposal for a role of grid cells in visual recognition. The variance of grid cell activity along saccade trajectories exhibits 6-fold symmetry across 360 degrees, akin to recently reported fMRI data. The model suggests that disconnecting grid cells from occipitotemporal inputs may yield prosopagnosia-like symptoms. The mechanism is robust with regard to partial visual occlusion, can accommodate size and position invariance, and suggests a functional explanation for medial temporal lobe involvement in visual memory for relational information and memory-guided attention.
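    The evidence-accumulation scheme described here can be illustrated with a toy loop: predicted saccade vectors between salient features are checked against where features are actually found, and a good match drives evidence toward a recognition threshold. All names, coordinates and thresholds below are hypothetical, and the grid-cell vector code is stood in for by plain 2-D vectors:

```python
import numpy as np

def accumulate_identity_evidence(start_xy, predicted_saccades, found_features,
                                 threshold=1.8, sigma=0.5):
    """Toy recognition loop. predicted_saccades: translation vectors from each
    salient feature to the next, as a grid-cell code would supply under the
    current identity hypothesis. found_features: where features are actually
    found after each saccade. A close match adds evidence; crossing the
    threshold confirms the hypothesis."""
    evidence, pos = 0.0, np.asarray(start_xy, float)
    for vec, found in zip(predicted_saccades, found_features):
        pos = pos + np.asarray(vec, float)            # saccade to the expected location
        err = np.linalg.norm(pos - np.asarray(found, float))
        evidence += np.exp(-(err / sigma) ** 2)       # near-perfect match contributes ~1
        if evidence >= threshold:
            return True, evidence
    return False, evidence

# Hypothetical layout, e.g. left eye -> right eye -> mouth, matching predictions:
print(accumulate_identity_evidence((0, 0), [(2, 0), (-1, -2)],
                                   [(2.05, 0.0), (1.0, -2.1)]))
```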

    Differential influences of environment and self-motion on place and grid cell firing

    Place and grid cells in the hippocampal formation provide foundational representations of environmental location, and potentially of locations within conceptual spaces. Some accounts predict that environmental sensory information and self-motion are encoded in complementary representations, while other models suggest that both features combine to produce a single coherent representation. Here, we use virtual reality to dissociate visual environmental inputs from physical motion inputs while recording place and grid cells in mice navigating virtual open arenas. Place cell firing patterns predominantly reflect visual inputs, while grid cell activity reflects a greater influence of physical motion. Thus, even when recorded simultaneously, place and grid cell firing patterns differentially reflect environmental information (or ‘states’) and physical self-motion (or ‘transitions’), and need not be mutually coherent.
